49 research outputs found

    Context-based Normalization of Histological Stains using Deep Convolutional Features

    While human observers are able to cope with variations in color and appearance of histological stains, digital pathology algorithms commonly require a well-normalized setting to achieve peak performance, especially when a limited amount of labeled data is available. This work provides a fully automated, end-to-end learning-based setup for normalizing histological stains, which considers the texture context of the tissue. We introduce Feature Aware Normalization, which extends the framework of batch normalization in combination with gating elements from Long Short-Term Memory units for normalization among different spatial regions of interest. By incorporating a pretrained deep neural network as a feature extractor steering a pixelwise processing pipeline, we achieve excellent normalization results and ensure a consistent representation of color and texture. The evaluation comprises a comparison of color histogram deviations, structural similarity, and the color volume obtained by the different methods. Comment: In: 3rd Workshop on Deep Learning in Medical Image Analysis (DLMIA 2017)
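    As a rough illustration of the idea described above, the following PyTorch sketch shows a Feature-Aware-Normalization-style unit: a plain normalization of the image branch that is re-modulated by sigmoid-gated scale and shift signals predicted from a pretrained feature extractor. Layer sizes, the choice of gates and the upsampling are assumptions made for illustration, not the authors' exact architecture.

        # Illustrative sketch only; layer sizes and gating choices are assumptions.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class FeatureAwareNorm(nn.Module):
            """Normalize an image-branch feature map, then rescale and shift it
            per pixel using signals derived from a pretrained feature extractor,
            with sigmoid gating reminiscent of LSTM gates."""
            def __init__(self, img_channels, feat_channels):
                super().__init__()
                self.bn = nn.BatchNorm2d(img_channels, affine=False)
                self.to_scale = nn.Conv2d(feat_channels, img_channels, kernel_size=1)
                self.to_shift = nn.Conv2d(feat_channels, img_channels, kernel_size=1)

            def forward(self, x, feats):
                # Bring the coarser extractor features to the spatial size of x.
                feats = F.interpolate(feats, size=x.shape[-2:], mode="bilinear",
                                      align_corners=False)
                scale = torch.sigmoid(self.to_scale(feats))  # multiplicative gate in (0, 1)
                shift = torch.tanh(self.to_shift(feats))     # bounded additive shift
                return self.bn(x) * scale + shift

        # Usage sketch: 'feats' would come from a frozen, pretrained network (e.g. a VGG layer).
        x = torch.randn(2, 32, 128, 128)       # image-branch activations
        feats = torch.randn(2, 256, 16, 16)    # extractor features
        out = FeatureAwareNorm(img_channels=32, feat_channels=256)(x, feats)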

    The Neuroscience Information Framework: A Data and Knowledge Environment for Neuroscience

    With support from the Institutes and Centers forming the NIH Blueprint for Neuroscience Research, we have designed and implemented a new initiative for integrating access to and use of Web-based neuroscience resources: the Neuroscience Information Framework. The Framework arises from the expressed need of the neuroscience community for neuroinformatic tools and resources to aid scientific inquiry, builds upon prior development of neuroinformatics by the Human Brain Project and others, and directly derives from the Society for Neuroscience’s Neuroscience Database Gateway. Partnered with the Society, its Neuroinformatics Committee, and volunteer consultant-collaborators, our multi-site consortium has developed: (1) a comprehensive, dynamic inventory of Web-accessible neuroscience resources, (2) an extended and integrated terminology describing resources and contents, and (3) a framework accepting and aiding concept-based queries. Evolving instantiations of the Framework may be viewed at http://nif.nih.gov, http://neurogateway.org, and other sites as they come online

    Die neue Kartieranleitung zur Erfassung aktueller Wassererosionsformen (The new mapping guide for recording current forms of water erosion)

    Since 1996, the DVWK-Merkblatt 239 „Bodenerosion durch Wasser – Kartieranleitung zur Erfassung aktueller Erosionsformen“ (soil erosion by water – mapping guide for recording current erosion features) has been the standard for erosion mapping in applied and scientific contexts. The mapping guide makes it possible to assess the causes, impacts and damage of erosion and is used in water management, soil protection and agriculture. However, requirements and possible uses have changed since then. For example, unambiguous keys have to be defined so that the symbols and parameters can be implemented digitally in data processing. New technical tools for recording field parameters, such as the use of copter drones and airborne laser scanning (ALS), have to be taken into account. In addition, new legal regulations now apply that underline the importance of erosion mapping. A working group of the Deutsche Vereinigung für Wasserwirtschaft, Abwasser und Abfall and the Bundesverband Boden is updating the mapping guide and presents the draft of the revised version

    iTools: A Framework for Classification, Categorization and Integration of Computational Biology Resources

    The advancement of the computational biology field hinges on progress in three fundamental directions – the development of new computational algorithms, the availability of informatics resource management infrastructures, and the capability of tools to interoperate and synergize. There is an explosion in algorithms and tools for computational biology, which makes it difficult for biologists to find, compare and integrate such resources. We describe a new infrastructure, iTools, for managing the query, traversal and comparison of diverse computational biology resources. Specifically, iTools stores information about three types of resources: data, software tools and web services. The iTools design, implementation and resource meta-data content reflect the broad research, computational, applied and scientific expertise available at the seven National Centers for Biomedical Computing. iTools provides a system for classification, categorization and integration of different computational biology resources across space and time scales, biomedical problems, computational infrastructures and mathematical foundations. A large number of resources are already iTools-accessible to the community, and this infrastructure is rapidly growing. iTools includes human and machine interfaces to its resource meta-data repository. Investigators or computer programs may utilize these interfaces to search, compare, expand, revise and mine meta-data descriptions of existing computational biology resources. We propose two ways to browse and display the iTools dynamic collection of resources. The first is based on an ontology of computational biology resources, and the second is derived from hyperbolic projections of manifolds or complex structures onto planar discs. iTools is an open-source project in terms of both its source code development and its meta-data content. iTools employs a decentralized, portable, scalable and lightweight framework for long-term resource management. We demonstrate several applications of iTools as a framework for integrated bioinformatics. iTools and the complete details about its specifications, usage and interfaces are available at the iTools web page http://iTools.ccb.ucla.edu
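    To make the notion of a queryable meta-data repository more concrete, the following Python sketch models the three resource types as a small record structure with a naive keyword search. The field names and the search helper are purely illustrative assumptions, not the actual iTools schema or API.

        # Purely illustrative; field names and the query helper are assumptions.
        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class ResourceRecord:
            name: str
            kind: str                      # "data", "software_tool" or "web_service"
            url: str
            keywords: List[str] = field(default_factory=list)
            description: str = ""

        def search(records: List[ResourceRecord], term: str) -> List[ResourceRecord]:
            """Naive keyword search over the meta-data records."""
            term = term.lower()
            return [r for r in records
                    if term in r.name.lower()
                    or term in r.description.lower()
                    or any(term in k.lower() for k in r.keywords)]

        repo = [
            ResourceRecord("ExampleAligner", "software_tool", "https://example.org/aligner",
                           keywords=["sequence alignment"]),
            ResourceRecord("ExampleAtlas", "data", "https://example.org/atlas",
                           keywords=["brain imaging"]),
        ]
        print([r.name for r in search(repo, "alignment")])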

    The Ontology for Biomedical Investigations

    The Ontology for Biomedical Investigations (OBI) is an ontology that provides terms with precisely defined meanings to describe all aspects of how investigations in the biological and medical domains are conducted. OBI re-uses ontologies that provide a representation of biomedical knowledge from the Open Biological and Biomedical Ontologies (OBO) project and adds the ability to describe how this knowledge was derived. We here describe the state of OBI and several applications that are using it, such as adding semantic expressivity to existing databases, building data entry forms, and enabling interoperability between knowledge resources. OBI covers all phases of the investigation process, such as planning, execution and reporting. It represents information and material entities that participate in these processes, as well as roles and functions. Prior to OBI, it was not possible to use a single internally consistent resource that could be applied to multiple types of experiments for these applications. OBI has made this possible by creating terms for entities involved in biological and medical investigations and by importing parts of other biomedical ontologies such as GO, Chemical Entities of Biological Interest (ChEBI) and Phenotype Attribute and Trait Ontology (PATO) without altering their meaning. OBI is being used in a wide range of projects covering genomics, multi-omics, immunology, and catalogs of services. OBI has also spawned other ontologies (Information Artifact Ontology) and methods for importing parts of ontologies (Minimum information to reference an external ontology term (MIREOT)). The OBI project is an open cross-disciplinary collaborative effort, encompassing multiple research communities from around the globe. To date, OBI has created 2366 classes and 40 relations along with textual and formal definitions. The OBI Consortium maintains a web resource (http://obi-ontology.org) providing details on the people, policies, and issues being addressed in association with OBI. The current release of OBI is available at http://purl.obolibrary.org/obo/obi.owl
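    The MIREOT mechanism mentioned above can be pictured as recording only a minimal set of facts about a borrowed term: where it comes from and where it is placed in the importing ontology. The sketch below is an illustrative assumption, not an official OBI or MIREOT data structure; the example IRIs refer to the ChEBI term for water and the BFO class 'material entity'.

        # Illustration of a MIREOT-style minimal reference; not an official schema.
        from dataclasses import dataclass

        @dataclass
        class MireotReference:
            source_ontology_iri: str    # ontology the term is borrowed from
            source_term_iri: str        # the borrowed term itself
            target_superclass_iri: str  # position of the term in the importing ontology

        chebi_water = MireotReference(
            source_ontology_iri="http://purl.obolibrary.org/obo/chebi.owl",
            source_term_iri="http://purl.obolibrary.org/obo/CHEBI_15377",        # 'water'
            target_superclass_iri="http://purl.obolibrary.org/obo/BFO_0000040",  # 'material entity'
        )
        print(chebi_water)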

    Oculus Rift Control of a Mobile Robot: Providing a 3D Virtual Reality Visualization for Teleoperation, or How to Enter a Robot's Mind

    Robots are about to make their way into society. Whether one speaks about robots as co-workers in industry, as support in hospitals, in elderly care, self-driving cars, or smart toys, the number of robots is growing continuously. Scaled somewhere between remote control and full autonomy, all robots require supervision in some form. This thesis connects the Oculus Rift virtual reality goggles to a mobile robot, aiming at a powerful visualization and teleoperation tool for supervision or teleassistance, with an immersive virtual reality experience. The system is tested in a user study to evaluate the human-robot interaction and obtain an intuition about the situation awareness of the participants

    Digital histopathology: Image processing for histological analyses and immune response quantification

    Digital pathology is an emerging field which requires dedicated algorithms for the automated analysis of digital histopathological images, as presented in this thesis. Histopathological workflows analyze biomedical tissue samples at the macroscopic and microscopic level, which requires, for example, cell nuclei counting, morphological inspection of cells and cell nuclei, or the analysis of the tissue composition. These analysis tasks are necessary both for the diagnosis of cancer and for research aimed at a better understanding of tumor characteristics, growth and immune system interaction. Previously applied analog histopathological workflows may suffer from subjective ratings resulting from different levels of experience of the pathologist as well as fatigue of the visual system during the analysis of large amounts of brightfield microscopy images. Facilitated by the availability of whole-slide scanning systems, digitized microscopy images can easily be obtained and utilized to automate tasks. Current challenges arising from whole-slide images (WSI) are the massive data sizes, which are usually in the range of gigapixels, color variations due to the staining procedure, and the inherently strong biological variance of biomedical tissue. The methods presented in this work address these challenges in order to provide scalable and reliable solutions for an algorithmic analysis of tissues and immune responses. Specifically, this thesis addresses the questions of how to handle the data and file sizes, how to appropriately deal with inherent sources of variation, how to achieve fast and reproducible analysis results, and how to communicate the reliability of the algorithmic results to the medical expert. Regarding the massive data file sizes, this thesis presents an adaptation of a new compression scheme that achieves superior compression ratios compared to the standard used in current WSI file formats. For accelerating the processing, a fast and reliable foreground segmentation is introduced, which prevents brightfield microscopic background patches from entering computationally complex processing steps. Moreover, a stain normalization algorithm is developed that incorporates deep features from a pretrained neural network, which results in excellent normalization performance. However, the simultaneous development of stain augmentation raises the question of whether normalization as a preprocessing step is still required in conjunction with deep learning. As this work heavily builds on modern deep learning algorithms, challenges regarding the amount of labeled data needed for training have to be addressed. To this end, a method for texture augmentation via image quilting is introduced that performs an unsupervised, realistic rearrangement of image patches and ground truth labels. It is shown in a classification scenario that this augmentation can lead to an improvement of tissue classification accuracy for underrepresented classes. Further contributions are workflows for tissue classification and cell nuclei detection. In both cases, classical approaches and deep learning methods are considered and evaluated. At the price of high training times and expensive hardware, the deep learning approaches clearly outperform classical methods. On large and diverse tissue datasets, the prediction of tissue maps with deep learning achieves an F1-score of 84%, exceeding the classical baseline method by nearly 28%. Besides superior performance, the presented deep learning architecture offers probabilistic predictions via runtime dropout that can be utilized as a quality control for the results. This partially alleviates the black-box decision making that neural networks are often criticized for, as it improves the indication of reliable and unreliable predictions. In the cell nuclei detection and localization task, F1-scores of up to 88% are achieved, with a less pronounced, but still notable, difference of 8% in favor of the deep learning approaches. In this task, the main contribution is a novel augmentation that enables the transformation of a very basic ellipse model for cell nuclei simulation into realistic cell nuclei images. Here, the challenge of preserving the classes “standard cell nuclei” and “immune cell nuclei” is crucial for training networks for immune response quantification. This is achieved by a novel loss function which provides a trade-off between enforcing consistency of the transform and the freedom to adapt the nucleus model locally. Furthermore, a broader view of cell nuclei localization is provided, in which an attempt is made to generalize the concept to other microscopy modalities and stains by defining network layers and architectures that are capable of adapting to the respective modality or WSI stain. However, it is shown that these additional capabilities do not further improve the results of cell nuclei analysis in histopathology, while tasks in other image processing domains benefit from the approach. Together with the considerations regarding stain normalization and augmentation, this indicates a necessary shift in the way tasks have to be approached when working with deep neural networks. Finally, this thesis incorporates the different methods into a digital pathology workflow and exemplifies their intended use in a single-mouse-trial study. This use case defines a potential application and showcases a setup comprising preprocessing, probabilistic classification, visualization and quality assurance, the definition of meta features, and methods for the graphical and quantitative assessment of tumor characteristics. This finally illustrates the benefits of the proposed approaches in a real-world preclinical research application
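    The runtime-dropout idea used above for quality control corresponds to what is commonly called Monte Carlo dropout: dropout layers are kept active at inference time, several stochastic forward passes are averaged, and their spread serves as an uncertainty estimate. The following PyTorch sketch is a generic illustration under an assumed stand-in model, sample count and threshold, not the thesis' actual implementation.

        # Generic Monte-Carlo-dropout sketch; model, sample count and threshold are assumptions.
        import torch
        import torch.nn as nn

        model = nn.Sequential(              # stand-in for a tissue classification network
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Dropout2d(p=0.5),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 4),
        )

        def mc_dropout_predict(model, x, n_samples=20):
            """Average several stochastic passes with dropout enabled; return the
            mean class probabilities and their standard deviation (uncertainty)."""
            model.train()   # keeps dropout sampling; a real model would keep batch norm frozen
            with torch.no_grad():
                probs = torch.stack([torch.softmax(model(x), dim=1)
                                     for _ in range(n_samples)])
            return probs.mean(dim=0), probs.std(dim=0)

        x = torch.randn(8, 3, 64, 64)                  # a batch of tissue patches
        mean_p, std_p = mc_dropout_predict(model, x)
        unreliable = std_p.max(dim=1).values > 0.2     # flag patches with high predictive spread
        print(mean_p.argmax(dim=1), unreliable)

    In practice one would switch only the dropout modules into training mode and keep normalization layers in evaluation mode, so that running statistics are not disturbed by the repeated passes.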